Given the huge impact that Online Social Networks (OSNs) have had on the way people get informed and form their opinions, they have become an attractive playground for malicious entities that want to spread misinformation and amplify its effect. Indeed, misinformation spreads easily on OSNs and is a serious threat to modern society, possibly influencing the outcome of elections or even putting people's lives at risk (e.g., by spreading "anti-vaccine" misinformation). It is therefore of paramount importance for society to have some form of "validation" of information spreading through OSNs, and such wide-scale validation would greatly benefit from automatic tools. In this paper, we show that it is difficult to carry out an automatic classification of misinformation using only the structural properties of content propagation cascades. We focus on structural properties because they are inherently difficult to manipulate with the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (as representative of misinformation) and scientific news (as representative of fact-checked content). Our findings show that conspiracy content actually reverberates in a way that is hard to distinguish from that of scientific content: for the classification mechanisms we investigated, the classification F1-score never exceeds 0.65 during content propagation, and remains below 0.7 even after propagation is complete.
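To make the evaluation setup concrete, the sketch below shows how a classifier over structural cascade features might be scored with F1. The features (cascade size, depth, maximum breadth), the toy data, and the threshold rule are hypothetical illustrations, not the classifiers or dataset actually evaluated in the paper.

```python
# Sketch: scoring a cascade classifier that uses only structural features.
# Label 1 = conspiracy, 0 = scientific news (hypothetical toy data).

def f1_score(y_true, y_pred, positive=1):
    """Standard F1 = harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy cascades: (size, depth, max_breadth) plus ground-truth label.
cascades = [
    ((120, 4, 60), 1),
    ((80, 3, 40), 0),
    ((200, 6, 90), 1),
    ((150, 5, 70), 0),
    ((90, 4, 45), 1),
    ((110, 4, 55), 0),
]

def classify(features):
    """Naive structural rule: flag large, deep cascades as conspiracy."""
    size, depth, breadth = features
    return 1 if size > 100 and depth >= 4 else 0

y_true = [label for _, label in cascades]
y_pred = [classify(feats) for feats, _ in cascades]
print(round(f1_score(y_true, y_pred), 2))  # prints 0.57
```

On this toy data the structural rule reaches only F1 ≈ 0.57, illustrating the kind of near-chance separability the abstract reports; the paper's actual numbers come from its own classifiers and Facebook dataset.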